ship logs to http endpoint #1228
Conversation
James-Pickett commented Jun 13, 2023 (edited)
- adds a new logger, "LogShipper", that is added via TeeLogger at launcher startup
- LogShipper ships logs on an interval using the endpoint and token provided by the control server or flags
- updates tracing code to use the "TraceIngestURL" flag and LogShipper to use "LogIngestUrl"
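A minimal sketch of how such an interval-based shipper could fit together. All names and fields here are hypothetical illustrations of the description above, not launcher's actual implementation; the `send` function is a stand-in for the real HTTP call.

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
	"time"
)

// logShipper buffers log lines in memory and flushes them to an HTTP
// endpoint on an interval, using an endpoint and token supplied by the
// control server or flags.
type logShipper struct {
	mu       sync.Mutex
	buf      bytes.Buffer
	endpoint string
	token    string
	send     func(endpoint, token string, body []byte) error
}

// Write satisfies io.Writer, so the shipper can sit behind a tee'd logger
// alongside the existing log destinations.
func (ls *logShipper) Write(p []byte) (int, error) {
	ls.mu.Lock()
	defer ls.mu.Unlock()
	return ls.buf.Write(p)
}

// Run flushes the buffer every interval until done is closed.
func (ls *logShipper) Run(interval time.Duration, done <-chan struct{}) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			ls.flush()
		case <-done:
			ls.flush()
			return
		}
	}
}

func (ls *logShipper) flush() {
	ls.mu.Lock()
	if ls.buf.Len() == 0 {
		ls.mu.Unlock()
		return
	}
	// copy the batch so the lock is released before the network call
	batch := make([]byte, ls.buf.Len())
	copy(batch, ls.buf.Bytes())
	ls.buf.Reset()
	ls.mu.Unlock()

	if err := ls.send(ls.endpoint, ls.token, batch); err != nil {
		fmt.Println("send failed:", err)
	}
}
```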
This looks pretty clean; I think it'll be easy to play with.
I've been stalling on #1183, but I think it would be easy to integrate this with it.
Don't use the db. Memory is fine. But buffering and flushing a batch makes sense.
In pkg/log/httpbuffer/httpbuffer.go:
> +}
+
+func (hb *httpBuffer) Write(p []byte) (n int, err error) {
+ hb.bufMutex.Lock()
+ defer hb.bufMutex.Unlock()
+
+ if hb.buf.Len() > hb.maxSize {
+ // just drop our logs on the floor and start over
+ hb.logger.Log("msg", "buffer is full, dropping logs", "size", hb.buf.Len())
+ hb.buf.Reset()
+ }
+
+ n, err = hb.buf.Write(p)
+
+ // only try to send if we're at send size and not currently sending
+ if hb.buf.Len() >= hb.sendSize && hb.sendMutex.TryLock() {
In that code it looks like we write to the db, then periodically flush and
purge. Are you thinking we should also back the http sending logger with
the db or is in memory okay? I guess the advantage of being db backed is
that it would persist through crashes ... starting to talk myself into it.
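A sketch of how the truncated `Write` method quoted above might continue, staying in memory per the suggestion to skip the db. The `sendFunc` stub and `dropped` counter are hypothetical stand-ins for the real sender and logger; the mutex names mirror the quoted diff.

```go
package main

import (
	"bytes"
	"sync"
)

// httpBuffer accumulates log bytes in memory, drops everything when the
// buffer exceeds maxSize, and kicks off an async send once sendSize is
// reached, guarded by TryLock so only one send runs at a time.
type httpBuffer struct {
	bufMutex  sync.Mutex
	sendMutex sync.Mutex
	buf       bytes.Buffer
	maxSize   int
	sendSize  int
	dropped   int // times a full buffer was reset (the real code logs this)
	sendFunc  func([]byte)
}

func (hb *httpBuffer) Write(p []byte) (n int, err error) {
	hb.bufMutex.Lock()
	defer hb.bufMutex.Unlock()

	if hb.buf.Len() > hb.maxSize {
		// just drop our logs on the floor and start over
		hb.dropped++
		hb.buf.Reset()
	}

	n, err = hb.buf.Write(p)

	// only try to send if we're at send size and not currently sending
	if hb.buf.Len() >= hb.sendSize && hb.sendMutex.TryLock() {
		batch := make([]byte, hb.buf.Len())
		copy(batch, hb.buf.Bytes())
		hb.buf.Reset()
		go func() {
			defer hb.sendMutex.Unlock()
			hb.sendFunc(batch)
		}()
	}

	return n, err
}
```

The tradeoff discussed in this thread still applies: an in-memory buffer is simpler but loses any unsent batch on a crash, while a db-backed buffer would persist through restarts at the cost of more machinery.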
(early comments)
I think I'm okay with it, but I have some questions about how this works if we merge it today. (It should be disabled unless the control server sets a target host, or we push a build with a default target host.)
cmd/launcher/launcher.go (Outdated)
@@ -148,8 +148,9 @@ func runLauncher(ctx context.Context, cancel func(), opts *launcher.Options) err
// Need to set up the log shipper so that we can get the logger early
// and pass it to the various systems.
isShippingLogs := k.ObservabilityIngestServerURL() != ""
Probably okay to start with it. An interesting question is whether it's possible to enable/disable/change the URL from the control server.
pkg/log/logshipper/logshipper.go (Outdated)
if !ls.knapsack.LogShippingEnabled() {
	return nil
}
I can't remember -- how performant is this, vs. storing a boolean on logshipper and registering it?
I think the values are cached, but just to be safe, I'll store the boolean on the log shipper.
Didn't make it through the sendbuffer; I'll come back and review that later.
Edit -- sendbuffer looked fine to me, I didn't have any additional comments there!
pkg/log/logshipper/logshipper.go (Outdated)
token, _ := ls.knapsack.TokenStore().Get(observabilityIngestTokenKey)
ls.sender.authtoken = string(token)

endpoint, err := logEndpoint(ls.knapsack)
Wouldn't you need to implement FlagsChanged(flagKeys ...keys.FlagKey) and call knapsack.RegisterChangeObserver to get agent flag (i.e. LogIngestServerURL) updates on time?
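The observer pattern suggested here could look roughly like the following sketch. The `FlagKey` type, flag name, and knapsack interface are simplified stand-ins for launcher's real ones, since those aren't quoted in this thread.

```go
package main

// FlagKey identifies an agent flag; the real keys package defines these.
type FlagKey string

const LogIngestServerURL FlagKey = "log_ingest_server_url"

// knapsack is a pared-down stand-in for launcher's knapsack interface.
type knapsack interface {
	LogIngestServerURL() string
	RegisterChangeObserver(observer flagsChangeObserver, keys ...FlagKey)
}

type flagsChangeObserver interface {
	FlagsChanged(flagKeys ...FlagKey)
}

type logShipper struct {
	k        knapsack
	endpoint string
}

func newLogShipper(k knapsack) *logShipper {
	ls := &logShipper{k: k}
	// subscribe to updates for the flag we care about, then read the
	// current value once so the shipper starts with a valid endpoint
	k.RegisterChangeObserver(ls, LogIngestServerURL)
	ls.FlagsChanged(LogIngestServerURL)
	return ls
}

// FlagsChanged recomputes cached state whenever a subscribed flag changes.
func (ls *logShipper) FlagsChanged(flagKeys ...FlagKey) {
	for _, key := range flagKeys {
		if key == LogIngestServerURL {
			ls.endpoint = ls.k.LogIngestServerURL()
		}
	}
}
```

The appeal of this approach over handling everything in Ping() is that each component only reacts to the flags it registered for, rather than re-reading all control server state on every update.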
I kinda took the lazy route and just subscribed to all agent_flags updates; it felt easier to handle all the control server updates in Ping(). However, I'm willing to change it if you think it would be cleaner.
https://github.com/James-Pickett/launcher/blob/james/log-shipping/cmd/launcher/launcher.go#L324
I think I have a slight preference for doing it the same way consistently everywhere. But maybe the best way is to do everything in Ping like you've got it here? We can chat about it later; I'm fine with this as-is regardless.
I think you're right that we should come up with some patterns for doing this consistently. But I don't know what it'll be yet.
Let's try!